When Is a Robot a Moral Agent?
Author
Abstract
In this paper I argue that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot’s behavior only by ascribing to it some predisposition or ‘intention’ to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots meeting all of these criteria will have moral rights as well as responsibilities, regardless of their status as persons.
Similar papers
Should Robots Kill? Moral Judgments for Actions of Artificial Cognitive Agents
Moral dilemmas are used to study the situations in which there is a conflict between two moral rules: e.g. is it permissible to kill one person in order to save more people. In standard moral dilemmas the protagonist is a human. However, the recent progress in robotics leads to the question of how artificial cognitive agents should act in situations involving moral dilemmas. Here, we study mora...
Free Will and Moral Responsibility in Islamic Philosophy
According to a common view among Muslim philosophers, a moral agent has free will if and only if she is able to do an action when she wants to and is able to avoid it when she wants otherwise. Implicit in this view is the Principle of Alternative Possibilities (PAP). On the other hand, according to this view, free will is dependent on requirements such as conception, judgement, tendency, decisi...
Moral Action Changes Mind Perception for Human and Artificial Moral Agents
Mind perception is studied for three different agents: a human, an artificial human, and a humanoid robot. The artificially created agents are presented as being indistinguishable from a human. Each agent is rated on 15 mental capacities. Three mind perception dimensions are identified: Experience, Agency, and Cognition. The artificial agents are rated higher on the Cognition dimensions than on ...
Moral Competence in Robots?
I start with the premise that any social robot must have moral competence. I offer a framework for what moral competence is and sketch the prospects for it to be developed in artificial agents. After considering three proposals for requirements of “moral agency” I propose instead to examine moral competence as a broader set of capacities. I posit that human moral competence consists of five com...
Should moral decisions be different for human and artificial cognitive agents?
Moral judgments are elicited using dilemmas presenting hypothetical situations in which an agent must choose between letting several people die or sacrificing one person in order to save them. The evaluation of the action or inaction of a human agent is compared to those of two artificial agents – a humanoid robot and an automated system. Ratings of rightness, blamefulness and moral permissibil...
Journal:
Volume, Issue:
Pages: -
Publication date: 2007